

Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms

Rismani, Shalaleh, Shelby, Renee, Davis, Leah, Rostamzadeh, Negar, Moon, AJung

arXiv.org Artificial Intelligence

Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles - fairness, transparency, privacy, and trust - and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.


Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework

Lee, Sung Une, Perera, Harsha, Liu, Yue, Xia, Boming, Lu, Qinghua, Zhu, Liming, Cairns, Jessica, Nottage, Moana

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is a widely developed and adopted technology across entire industry sectors. Integrating environmental, social, and governance (ESG) considerations with AI investments is crucial for ensuring ethical and sustainable technological advancement. Particularly from an investor perspective, this integration not only mitigates risks but also enhances long-term value creation by aligning AI initiatives with broader societal goals. Yet, this area has been less explored in both academia and industry. To bridge the gap, we introduce a novel ESG-AI framework, which is developed based on insights from engagements with 28 companies and comprises three key components. The framework, developed in collaboration with industry practitioners, provides a structured approach to this integration. The ESG-AI framework gives an overview of the environmental and social impacts of AI applications, helping users such as investors assess the materiality of AI use. Moreover, it enables investors to evaluate a company's commitment to responsible AI through structured engagements and thorough assessment of specific risk areas. We publicly released the framework and toolkit in April 2024, and it has received significant attention and positive feedback from the investment community. This paper details each component of the framework, demonstrating its applicability in real-world contexts and its potential to guide ethical AI investments.


Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment

Lee, Sung Une, Perera, Harsha, Liu, Yue, Xia, Boming, Lu, Qinghua, Zhu, Liming

arXiv.org Artificial Intelligence

The rapid growth of Artificial Intelligence (AI) has underscored the urgent need for responsible AI practices. Despite increasing interest, a comprehensive AI risk assessment toolkit remains lacking. This study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives. By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks, aligning with emerging regulations like the EU AI Act, and enhancing overall AI governance. A key benefit of the RAI Question Bank is its systematic approach to linking lower-level risk questions to higher-level ones and related themes, preventing siloed assessments and ensuring a cohesive evaluation process. Case studies illustrate the practical application of the RAI Question Bank in assessing AI projects, from evaluating risk factors to informing decision-making processes. The study also demonstrates how the RAI Question Bank can be used to ensure compliance with standards, mitigate risks, and promote the development of trustworthy AI systems. This work advances RAI by providing organizations with a valuable tool to navigate the complexities of ethical AI development and deployment while ensuring comprehensive risk management.


Resolving Ethics Trade-offs in Implementing Responsible AI

Sanderson, Conrad, Schleiger, Emma, Douglas, David, Kuhnert, Petra, Lu, Qinghua

arXiv.org Artificial Intelligence

While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of considered context, scope, methods for measuring contexts, and degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.


Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust

Duenser, Andreas, Douglas, David M.

arXiv.org Artificial Intelligence

We present an overview of the literature on trust in AI and AI trustworthiness and argue for the need to distinguish these concepts more clearly and to gather more empirical evidence on what contributes to people's trusting behaviours. We discuss how trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect how users perceive the system's trustworthiness is neither abundant nor clear. AI systems should be recognised as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy. Without recognising these nuances, trust in AI and trustworthy AI risk becoming nebulous terms for any desirable feature of AI systems.


Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects

Sanderson, Conrad, Douglas, David, Lu, Qinghua

arXiv.org Artificial Intelligence

Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.


Operationalizing Responsible AI at Scale: CSIRO Data61's Pattern-Oriented Responsible AI Engineering Approach

Communications of the ACM

For the world to realize the benefits brought by AI, it is important to ensure artificial intelligence (AI) systems are responsibly developed, used throughout their entire life cycle, and trusted by the humans expected to rely on them. This goal for AI adoption has triggered a significant national effort to realize responsible AI (RAI) in Australia. CSIRO Data61 is the data and digital specialist arm of Australia's national science agency. In 2019, CSIRO Data61 worked with the Australian government to conduct the AI Ethics Framework research. This work led to the release of eight AI ethics principles to ensure Australia's adoption of AI is safe, secure, and reliable. It is challenging to turn high-level AI ethics principles into real-life practices.


AI Ethics And AI Law Clarifying What In Fact Is Trustworthy AI

#artificialintelligence

Will we be able to achieve trustworthy AI, and if so, how? Trust is everything, so they say. The noted philosopher Lao Tzu said that those who do not trust enough will not be trusted. Ernest Hemingway, an esteemed novelist, stated that the best way to find out if you can trust somebody is by trusting them. Meanwhile, it seems that trust is both precious and brittle. The trust that one has can collapse like a house of cards or suddenly burst like a popped balloon. The ancient Greek tragedian Sophocles asserted that trust dies but mistrust blossoms. French philosopher and mathematician Descartes contended that it is prudent never to trust wholly those who have deceived us even once. Billionaire business investor extraordinaire Warren Buffett exhorted that it takes twenty years to build a trustworthy reputation and five minutes to ruin it. You might be surprised to know that all of these varied views and provocative opinions about trust are crucial to the advent of Artificial Intelligence (AI). Yes, there is something keenly referred to as trustworthy AI that keeps getting a heck of a lot of attention these days, including handwringing catcalls from within the field of AI and also boisterous outbursts by those outside of the AI realm. The overall notion entails whether or not society is going to be willing to place trust in the likes of AI systems. Presumably, if society won't or can't trust AI, the odds are that AI systems will fail to get traction.


Kingdom of Saudi Arabia develops AI ethics principles

#artificialintelligence

RIYADH: The Kingdom of Saudi Arabia proudly announces its AI Ethics Principles for public consultation. They were designed by the Saudi Data and Artificial Intelligence Authority (SDAIA) to be a practical guide to incorporating AI ethics throughout the AI system development life cycle. The AI Ethics Principles recognize the importance of integrating artificial intelligence and technology innovation into the Kingdom's services for its citizens and visitors. After analyzing global and domestic standards and guidelines for AI use, SDAIA has developed an operational framework that entities can use to promote AI while limiting the technology's irresponsible use. The AI ethics principles will provide common ground and standards to help the Kingdom avoid or reduce the technology's limitations.


AI Ethics Saying That AI Should Be Especially Deployed When Human Biases Are Aplenty

#artificialintelligence

Trying to overcome untoward human biases by replacing humans with AI is not as straightforward as it might seem. Humans have got to know their limitations. You might recall the akin famous line about knowing our limitations as grittily uttered by the character Dirty Harry in the 1973 movie entitled Magnum Force (per the spoken words of actor Clint Eastwood in his memorable role as Inspector Harry Callahan). The overall notion is that sometimes we tend to overlook our own limits and get ourselves into hot water accordingly. Whether due to hubris, being egocentric, or simply being blind to our own capabilities, the precept of being aware of and taking into explicit account our proclivities and shortcomings is abundantly sensible and helpful. Let's add a new twist to the sage piece of advice. Artificial Intelligence (AI) has got to know its limitations. What do I mean by that variant of the venerated catchphrase? Turns out that the initial rush to get modern-day AI into use as a hopeful solver of the world's problems has become sullied and altogether muddied by the realization that today's AI does have some rather severe limitations. We went from the uplifting headlines of AI For Good and have increasingly found ourselves mired in AI For Bad. You see, many AI systems have been developed and fielded with all sorts of untoward racial and gender biases, and a myriad of other such appalling inequities.